    Classification accuracy increase using multisensor data fusion

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, QuickBird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion between materials such as different roofs, pavements, roads, etc. and may therefore result in misinterpretation and misuse of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their use in many applications. A further improvement can be achieved by fusing multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework provides a consistent way of combining multisource data following consensus theory. The classification is not affected by the limitations of dimensionality, and the calculation complexity depends primarily on the dimensionality reduction step. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport facilities, forest, roads, railroads, etc.
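
    The following minimal sketch illustrates the general fission-clustering-aggregation structure described above; the array shapes, the choice of k-means for the clustering step, and the use of an MLP as aggregator are illustrative assumptions, not the authors' exact INFOFUSE implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    def fuse_and_classify(sar, msi, dsm, labels, n_clusters=32):
        """sar: (H, W) SAR intensity; msi: (H, W, B) multispectral bands;
        dsm: (H, W) surface heights; labels: (H, W) training labels, -1 = unlabeled."""
        h, w = sar.shape
        # 1) Information fission: per-pixel features from each source.
        feats = [sar.reshape(-1, 1), msi.reshape(h * w, -1), dsm.reshape(-1, 1)]

        # 2) Representation on a finite domain / dimensionality reduction:
        #    quantize each source onto a finite alphabet of cluster indices.
        symbols = []
        for f in feats:
            idx = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(f)
            symbols.append(np.eye(n_clusters)[idx])       # one-hot symbols
        x = np.hstack(symbols)                            # (H*W, 3*n_clusters)

        # 3) Aggregation: supervised combination of the per-sensor symbols.
        y = labels.reshape(-1)
        train = y >= 0
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
        clf.fit(x[train], y[train])
        return clf.predict(x).reshape(h, w)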

    Alphabet-based Multisensory Data Fusion and Classification using Factor Graphs

    The way multisensory data are integrated is a crucial step of any data fusion method. Different physical types of sensors (optical, thermal, acoustic, or radar) with different resolutions, and different types of GIS digital data (elevation, vector maps), require a proper method of data integration. Incommensurability of the data may not allow conventional statistical methods to be used for fusion and processing. A correct and established way of multisensory data integration is required to deal with such incommensurable data, as employing an inappropriate methodology may lead to errors in the fusion process. To perform a proper multisensory data fusion, several strategies have been developed (Bayesian approaches, linear and log-linear opinion pools, neural networks, fuzzy logic). The employment of these approaches is motivated by weighted consensus theory, which leads to fusion processes that are correctly performed for a variety of data properties.
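
    The linear and log-linear opinion pools mentioned above can be sketched as weighted combinations of per-sensor class-probability vectors; the probabilities and consensus weights below are made up purely for illustration.

    import numpy as np

    def linear_pool(p, w):
        """p: (n_sensors, n_classes) probabilities; w: (n_sensors,) weights summing to 1."""
        return w @ p

    def log_linear_pool(p, w, eps=1e-12):
        q = np.exp(w @ np.log(p + eps))      # weighted geometric mean per class
        return q / q.sum()

    p = np.array([[0.7, 0.2, 0.1],           # e.g. optical classifier
                  [0.5, 0.4, 0.1],           # e.g. SAR classifier
                  [0.3, 0.3, 0.4]])          # e.g. elevation-based classifier
    w = np.array([0.5, 0.3, 0.2])
    print(linear_pool(p, w), log_linear_pool(p, w))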

    The TerraSAR-X Traffic Monitoring System

    The presentation gives an overview of the TerraSAR-X traffic monitoring project at DLR. The tasks of the "traffic processor" and the overall ground segment are described. Results from the first airborne campaigns are presented, including the monitoring of a traffic jam on a motorway near Munich, Germany. Furthermore, radar cross sections of passenger cars are presented.

    Multi-resolution, multi-sensor image fusion: general fusion framework

    Multi-resolution image fusion, also known as pansharpening, aims to include spatial information from a high resolution image, e.g. a panchromatic or Synthetic Aperture Radar (SAR) image, in a low resolution image, e.g. a multi-spectral or hyper-spectral image, while preserving the spectral properties of the low resolution image. A signal processing view of this problem allowed us to systematically classify most known multi-resolution image fusion approaches and resulted in a General Framework for image Fusion (GFF), which is well suited to the fusion of multi-sensor data such as optical-optical and optical-radar imagery. Examples are presented for WorldView-1/2 and TerraSAR-X data.
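
    As a point of reference for the kind of multi-resolution fusion the GFF generalizes, the sketch below shows a generic high-pass-filter pansharpening step; it is not the GFF itself, and the resolution ratio and smoothing width are assumptions.

    import numpy as np
    from scipy import ndimage

    def hpf_pansharpen(pan, ms, ratio=4, sigma=2.0):
        """pan: (H, W) high-resolution band; ms: (h, w, B) low-resolution bands, H = ratio * h."""
        # Upsample each low-resolution band to the high-resolution grid.
        ms_up = np.stack([ndimage.zoom(ms[..., b], ratio, order=1)
                          for b in range(ms.shape[-1])], axis=-1)
        # Spatial detail = high-resolution image minus its low-pass version.
        detail = pan - ndimage.gaussian_filter(pan, sigma)
        # Inject the detail into every band, leaving the band means untouched.
        return ms_up + detail[..., None]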

    Detection of traffic congestion in airborne SAR imagery

    Detection of traffic congestion is an important issue both for the transportation research community and for the everyday life of motorists. Remote sensing sensors installed on aircraft or satellites enable information collection for various traffic applications over large areas. Optical systems are already in use but are limited by their requirements for daylight operation and cloud-free conditions. Synthetic aperture radar (SAR) systems seem more promising due to their all-weather capability. We approach the traffic congestion detection problem with a two-channel airborne SAR sensor flown in along-track interferometry (ATI) configuration over the motorway, combining various techniques: look processing, channel balancing, coherent differencing, e.g. the displaced phase center array (DPCA), image processing, and incorporation of a priori information such as a traffic flow model and the road network. The potential of the proposed method is demonstrated with airborne E-SAR data collected during a flight campaign over a highway near Munich.
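
    A minimal sketch of the DPCA clutter-cancellation step mentioned above: after co-registration, a single complex gain balances the two along-track channels and their coherent difference suppresses stationary scatterers, leaving candidate moving vehicles. The global gain estimate and the detection threshold are simplifying assumptions.

    import numpy as np

    def dpca_detect(ch1, ch2, threshold_db=10.0):
        """ch1, ch2: co-registered complex SAR images from the fore and aft channels."""
        # Channel balancing with one least-squares complex gain for the whole scene.
        gain = np.vdot(ch2, ch1) / np.vdot(ch2, ch2)
        diff = ch1 - gain * ch2                           # coherent difference (DPCA)
        power_db = 10 * np.log10(np.abs(diff) ** 2 + 1e-12)
        clutter_db = np.median(power_db)
        return power_db > clutter_db + threshold_db       # candidate moving vehicles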

    Radar Signatures of a Passenger Car

    The upcoming new SAR satellites such as TerraSAR-X and Radarsat-2 offer high spatial image resolution and dual receive antenna capabilities, which open new opportunities for world-wide traffic monitoring applications. If the radar cross section of the vehicles is strong enough, they can be detected in the SAR data and their speed can be measured. For system performance prediction and algorithm development it is therefore indispensable to know the radar cross section of typical passenger cars. The geometry parameters which have to be considered are the radar look direction, the incidence angle, and the vehicle orientation. In this paper the radar signatures of non-moving or parked cars are presented. They are measured experimentally from airborne E-SAR data, which were collected during flight campaigns in 2005 and 2006 with multiple over-flights at different aircraft headings. The radar signatures could be measured for the whole range of aspect angles from 0º to 180º and with high angular resolution due to the large synthetic aperture length of the E-SAR radar sensor. The analysis for one type of passenger car and particular incidence angles showed that the largest radar cross section values, and thus the greatest chance of detecting the vehicles, appear when the car is seen from the front, the back, or the side. Radar cross section values for slanted views are much lower and are therefore less suitable for car detection. The measurements were performed in X-band (9.6 GHz), VV polarization, and at incidence angles of 41.5º and 42.5º. The derived radar signature profile can also be used for the verification of radar cross section simulation studies.
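
    A radar-cross-section profile over aspect angle, as described above, could be tabulated roughly as in the sketch below; the bin width is arbitrary, and the calibration from image intensity to RCS in square metres is assumed to have been done beforehand by the E-SAR processing chain.

    import numpy as np

    def rcs_profile(rcs_m2, aspect_deg, bin_width=5.0):
        """rcs_m2: calibrated RCS samples (m^2); aspect_deg: aspect angles in [0, 180)."""
        edges = np.arange(0.0, 180.0 + bin_width, bin_width)
        idx = np.digitize(aspect_deg, edges) - 1
        profile_dbsm = []
        for b in range(len(edges) - 1):
            samples = rcs_m2[idx == b]
            # Average in the linear domain, then convert to dBsm.
            profile_dbsm.append(10 * np.log10(samples.mean()) if samples.size else np.nan)
        return edges[:-1] + bin_width / 2, np.array(profile_dbsm)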

    Traffic Classification And Speed Estimation In Time Series Of Airborne Optical Remote Sensing Images

    In this paper we propose a new two-level traffic parameter estimation approach based on traffic classification into three classes (free-flow, congested, and stopped traffic) in image time series of airborne optical remote sensing data. The proposed method combines various techniques: change detection between two images; image processing such as binarization and filtering; incorporation of a priori information such as the road network and information about vehicles and roads; and finally the use of traffic models. The change detection between two images with a short time lag of several seconds is implemented using the multivariate alteration detection method, resulting in a change image in which the moving vehicles on the roads are highlighted. Further, image processing techniques are applied to derive the vehicle density in the binarized and denoised change image. Finally, this estimated vehicle density is related to the vehicle density obtained by modelling the traffic flow for a road segment. The model is derived from the traffic classification, a priori information about vehicle sizes and road parameters, the road network, and the spacing between the vehicles. The modelled vehicle density is then directly related to the average vehicle speed on the road segment, and thus information about the traffic situation can be derived. To confirm our idea and to validate the method, several flight campaigns were conducted with the DLR airborne experimental wide-angle optical 3K digital camera system operated on a Do-228 aircraft. Experiments are carried out to analyse the performance of the proposed traffic parameter estimation method for highways and main streets in cities. The estimated speed profiles coincide qualitatively and quantitatively well with the reference measurements.
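
    The final step, relating an image-derived vehicle density to an average speed through a traffic-flow model, can be sketched as below. The authors' exact model is not reproduced here; the classic Greenshields relation stands in for it, and all parameter values are assumptions.

    def speed_from_density(vehicles_detected, segment_length_km, n_lanes,
                           v_free_kmh=120.0, k_jam_veh_per_km=140.0):
        """Estimate the mean speed on a road segment from a vehicle count."""
        density = vehicles_detected / (segment_length_km * n_lanes)   # veh/km/lane
        density = min(density, k_jam_veh_per_km)
        # Greenshields: speed falls linearly from free-flow speed to zero at jam density.
        return v_free_kmh * (1.0 - density / k_jam_veh_per_km)

    # Example: 45 vehicles detected on a 1.5 km, three-lane segment -> ~111 km/h.
    print(speed_from_density(45, 1.5, 3))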

    Detection of Traffic Congestion in SAR Imagery

    Detection of traffic congestion is an important issue both for the transportation research community and for the everyday life of motorists. A new type of information is needed for a more efficient use of road networks. Remote sensing sensors installed on aircraft or satellites enable data collection for various traffic applications over large areas, especially areas not covered by other, e.g. terrestrial, sensors, or areas that are difficult to access. Synthetic aperture radar (SAR) systems seem very promising due to their all-weather capability. We approach the traffic congestion detection problem with a two-channel SAR sensor flown in along-track interferometry (ATI) configuration over the motorway, combining various techniques: look processing, channel balancing, coherent change detection, e.g. the displaced phase center array (DPCA), image processing, and incorporation of a priori information such as a traffic model and the road network. The potential of the proposed method is demonstrated with airborne E-SAR data collected during several campaigns over highways in Germany. The application of the method to future tight satellite formations is discussed.
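
    One way the a priori road network could constrain the detection step, sketched under the assumption that both the detection map and the road network are available as co-registered rasters (hypothetical inputs, not the authors' data structures):

    import numpy as np

    def count_detections_per_segment(detections, road_segments):
        """detections: (H, W) boolean map of moving-target candidates;
        road_segments: (H, W) int map, 0 = off-road, k > 0 = road segment id."""
        on_road = detections & (road_segments > 0)
        counts = np.bincount(road_segments[on_road],
                             minlength=road_segments.max() + 1)
        return counts[1:]          # candidate vehicles per road segment id 1..N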

    Fusion of optical and radar remote sensing data: Munich city example

    Fusion of optical and radar remote sensing data has recently become a topical subject in various application areas, though the results are not always satisfactory. In this paper we analyze some disturbing aspects of fusing orthoimages from sensors with different acquisition geometries. These aspects are errors in the DEM used for image orthorectification and the presence of 3D objects in the scene. We analyze how these effects influence ground displacement in orthoimages produced from optical and radar data. Further, we propose a sensor formation with acquisition geometry parameters which allows ground displacements in the different orthoimages due to the above-mentioned effects to be minimized or compensated, and which creates good prerequisites for subsequent fusion in specific application areas, e.g. matching, filling data gaps, classification, etc. To demonstrate the potential of the proposed approach, two pairs of optical-radar data were acquired over an urban area, the city of Munich, Germany. The first collection, of WorldView-1 and TerraSAR-X data, followed the proposed recommendations for the acquisition geometry parameters, whereas the second collection, of IKONOS and TerraSAR-X data, was acquired with arbitrary parameters. The experiment fully confirmed our ideas. Moreover, it opens new possibilities for optical and radar image fusion.
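
    A worked sketch of the ground displacements discussed above, under a simple flat-terrain geometry (an assumption, not the paper's full derivation): an un-modelled height h shifts an optical orthoimage pixel away from the sensor by roughly h times the tangent of the viewing angle, and a SAR orthoimage pixel toward the sensor by roughly h divided by the tangent of the incidence angle.

    import math

    def ortho_displacements(h_m, optical_view_deg, sar_incidence_deg):
        d_opt = h_m * math.tan(math.radians(optical_view_deg))    # shift away from the sensor
        d_sar = h_m / math.tan(math.radians(sar_incidence_deg))   # shift toward the sensor
        return d_opt, d_sar

    # Example: a 20 m building, 20 deg optical off-nadir view, 41 deg SAR incidence
    # gives roughly 7.3 m and 23.0 m of displacement, respectively.
    print(ortho_displacements(20.0, 20.0, 41.0))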